
    Traffic and Resource Management in Robust Cloud Data Center Networks

    Cloud computing is becoming the mainstream paradigm, as organizations both large and small begin to harness its benefits. Cloud computing owes its success to giving IT exactly what it needed: the ability to grow and shrink computing resources on demand, in a cost-effective manner, without the anguish of infrastructure design and setup. The ability to adapt computing capacity to market fluctuations is just one of the many benefits cloud computing has to offer, and it is a major reason the paradigm is rising so rapidly. According to a Gartner report, total sales of the various cloud services were expected to be worth 204 billion dollars worldwide in 2016. With this massive growth, the performance of the underlying infrastructure is crucial to its success and sustainability. Currently, cloud computing depends heavily on data centers for its daily business needs. Indeed, it is through the virtualization of data centers that the concept of "computing as a utility" emerged. However, data center virtualization is still in its infancy, and there exists a plethora of open research issues and challenges, including, but not limited to: optimized topologies and protocols, embedding design methods and online algorithms, resource provisioning and allocation, data center energy efficiency, fault tolerance and fault-tolerant design, improving service availability under failure conditions, and enabling network programmability. This dissertation elaborates on and addresses key research challenges related to the design and operation of efficient virtualized data centers and data center infrastructure for cloud services. In particular, we investigate scalable traffic management and traffic engineering in data center networks and present a decomposition method that solves the problem exactly, with considerable runtime improvement over mathematical-based formulations.
To maximize the network's admissibility and increase revenue, cloud providers must make efficient use of their network resources. This goal is highly correlated with the employed resource allocation and placement schemes, formally known as the virtual network embedding problem. This dissertation examines multiple facets of this problem. In particular, we study the embedding problem for services with a one-to-many communication mode, which we denote the multicast virtual network embedding problem. We then tackle the survivable virtual network embedding problem by proposing a fault-tolerant design that provides guaranteed service continuity in the event of server failure. Furthermore, we consider the embedding problem for elastic services under heterogeneous node failures. Finally, to enable and support data center network programmability, we study the placement problem for softwarized network functions (e.g., load balancers, firewalls), formally known as the virtual network function assignment problem. Owing to its combinatorial complexity, we propose a novel decomposition method and show numerically that it is a hundred times faster than mathematical formulations from the recent literature.
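The core of the embedding problem the dissertation studies is mapping virtual nodes onto substrate nodes with sufficient CPU and virtual links onto bandwidth-feasible substrate paths. As a minimal illustrative sketch only (a greedy first-fit heuristic, not the exact decomposition methods the dissertation proposes; all names and the input format are assumptions):

```python
from collections import deque

def bfs_path(adj, src, dst, bw, demand):
    # Shortest hop-count path using only links with enough residual bandwidth.
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path, v = [], dst
            while prev[v] is not None:
                path.append((prev[v], v))
                v = prev[v]
            return path[::-1]
        for v in adj[u]:
            e = tuple(sorted((u, v)))
            if v not in prev and bw[e] >= demand:
                prev[v] = u
                q.append(v)
    return None  # no feasible path

def embed(sub_cpu, sub_bw, adj, vnodes, vlinks):
    """Greedy first-fit embedding: place virtual nodes on distinct substrate
    nodes with enough CPU, then route virtual links on feasible paths."""
    cpu, bw, placed = dict(sub_cpu), dict(sub_bw), {}
    for v, demand in sorted(vnodes.items(), key=lambda x: -x[1]):
        host = next((s for s in sorted(cpu, key=lambda s: -cpu[s])
                     if cpu[s] >= demand and s not in placed.values()), None)
        if host is None:
            return None  # embedding rejected
        placed[v] = host
        cpu[host] -= demand
    for (a, b), demand in vlinks.items():
        path = bfs_path(adj, placed[a], placed[b], bw, demand)
        if path is None:
            return None
        for u, w in path:
            bw[tuple(sorted((u, w)))] -= demand
    return placed
```

A rejected request (returning `None`) is what "admissibility" measures: better allocation schemes reject fewer requests for the same substrate.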

    The Challenges of Trace-Based Wi-Fi Emulation

    The Wi-Fi link is unpredictable and has never been easy to measure perfectly; some bias is always bound to creep in. As wireless becomes the medium of choice, it is useful to capture Wi-Fi traces in order to evaluate, tune, and adapt different applications and protocols. Several methods have been used to experiment with different wireless conditions: simulation, experimentation, and trace-driven emulation. In this paper, we argue that trace-driven emulation is the most favourable approach. In the absence of a trace-driven emulation tool for Wi-Fi, we evaluate the state-of-the-art trace-driven emulation tool for cellular networks and identify issues for Wi-Fi: interference with concurrent traffic, interference with its own traffic when measurements are done on both uplink and downlink simultaneously, and packet loss. We provide a solid argument as to why this tool falls short of effectively capturing Wi-Fi traces. The outcome of our analysis guides a number of suggestions on how the existing tool can be tweaked to capture Wi-Fi traces accurately.
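The idea behind trace-driven emulation is to record per-packet link behaviour once, then replay it against live traffic. A minimal sketch of the replay step, assuming a hypothetical trace format of `(delay_ms, dropped)` observations (the function name and format are illustrative, not the evaluated tool's API):

```python
def replay_trace(packets, trace):
    """Trace-driven link emulation sketch: apply a recorded sequence of
    (delay_ms, dropped) observations to a packet stream, cycling through
    the trace if the stream is longer than the recording."""
    delivered = []
    for i, pkt in enumerate(packets):
        delay_ms, dropped = trace[i % len(trace)]
        if not dropped:
            # Deliver the packet with the recorded one-way delay.
            delivered.append((pkt, delay_ms))
    return delivered
```

A real emulator would schedule deliveries on a clock and shape uplink and downlink separately; conflating the two directions is precisely one of the Wi-Fi measurement pitfalls the paper identifies.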

    Inferring Streaming Video Quality from Encrypted Traffic: Practical Models and Deployment Experience

    Inferring the quality of streaming video applications is important for Internet service providers, but the fact that most video streams are encrypted makes it difficult to do so. We develop models that infer quality metrics (i.e., startup delay and resolution) for encrypted streaming video services. Our paper builds on previous work but extends it in several ways. First, the model works in deployment settings where video sessions and segments must be identified from a mix of traffic and where the time precision of the collected traffic statistics is coarser (e.g., due to aggregation). Second, we develop a single composite model that works for a range of different services (i.e., Netflix, YouTube, Amazon, and Twitch), as opposed to just a single service. Third, unlike many previous models, ours performs predictions at finer granularity (e.g., the precise startup delay instead of just detecting short versus long delays), allowing better conclusions to be drawn about the ongoing streaming quality. Fourth, we demonstrate that the model is practical through a 16-month deployment in 66 homes and provide new insights about the relationship between Internet "speed" and the quality of the corresponding video streams for a variety of services; we find that higher speeds provide only minimal improvements to startup delay and resolution.
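The inference pipeline this line of work describes reduces to two steps: aggregate encrypted traffic into time-windowed features, then map features to a quality label. A deliberately simplified sketch under assumed inputs (per-packet `(timestamp, size)` records and hand-picked throughput centroids; real models are trained classifiers, not nearest-centroid rules):

```python
def extract_features(pkts, window_s=1.0):
    """Aggregate downstream (timestamp, size_bytes) packets into
    per-window features: (total_bytes, packet_count)."""
    feats = {}
    for ts, size in pkts:
        w = int(ts // window_s)
        b, n = feats.get(w, (0, 0))
        feats[w] = (b + size, n + 1)
    return [feats[w] for w in sorted(feats)]

def predict_resolution(window_feats, centroids):
    """Nearest-centroid guess of resolution per window, comparing the
    window's byte volume against a mean bytes-per-window per label."""
    return [min(centroids, key=lambda lab: abs(centroids[lab] - bytes_))
            for bytes_, _count in window_feats]
```

Note that nothing here inspects payloads: volume and timing survive encryption, which is what makes this class of inference possible at all.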

    Lightweight, General Inference of Streaming Video Quality from Encrypted Traffic

    Accurately monitoring application performance is becoming more important for Internet Service Providers (ISPs), as users increasingly expect their networks to consistently deliver acceptable application quality. At the same time, the rise of end-to-end encryption makes it difficult for network operators to determine video stream quality, including metrics such as startup delay, resolution, rebuffering, and resolution changes, directly from the traffic stream. This paper develops general methods to infer streaming video quality metrics from encrypted traffic using lightweight features. Our evaluation shows that our models are not only as accurate as previous approaches, but also generalize across multiple popular video services, including Netflix, YouTube, Amazon Instant Video, and Twitch. The ability of our models to rely on lightweight features points to promising possibilities for implementing such models at a variety of network locations along the end-to-end path, from the edge to the core.
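"Lightweight" here means features computable from packet headers alone, with no payload inspection and little state, so they can be collected at any point on the path. A sketch of plausible such features, assuming hypothetical `(timestamp, size, direction)` input records (the exact feature set in the paper may differ):

```python
def lightweight_features(pkts):
    """Payload-free flow features from (timestamp, size_bytes, direction)
    records with direction 'up' or 'down'; works on encrypted traffic."""
    down = [(t, s) for t, s, d in pkts if d == 'down']
    up_bytes = sum(s for _, s, d in pkts if d == 'up')
    down_bytes = sum(s for _, s in down)
    dur = (max(t for t, *_ in pkts) - min(t for t, *_ in pkts)) or 1.0
    gaps = [b[0] - a[0] for a, b in zip(down, down[1:])]
    return {
        'down_throughput_bps': 8 * down_bytes / dur,
        'updown_ratio': up_bytes / down_bytes if down_bytes else 0.0,
        'mean_down_gap_s': sum(gaps) / len(gaps) if gaps else 0.0,
    }
```

Because every quantity is a running sum or a timestamp difference, such features can be maintained per flow in constant memory, which is what makes deployment from edge to core plausible.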

    MINTED: Multicast virtual network embedding in cloud data centers with delay constraints

    Network virtualization is regarded as a pillar of cloud computing, enabling the multi-tenancy concept whereby multiple Virtual Networks (VNs) can cohabit the same substrate network. With network virtualization, the problem of allocating resources to the various tenants, commonly known as the Virtual Network Embedding (VNE) problem, emerges as a challenge. Its NP-hard nature has drawn considerable attention from the research community, much of which, however, has overlooked the type of communication a given VN may exhibit, assuming one-to-one (unicast) communication only. In this paper, we motivate the importance of characterizing the mode of communication in VN requests, and we focus on the problem of embedding VNs with a one-to-many (multicast) communication mode. Throughout the paper, we highlight the unique properties of multicast VNs and their distinct Quality of Service (QoS) requirements, most notably the end-delay and delay-variation constraints for delay-sensitive multicast services. Further, we showcase the limitations of handling a multicast VN as unicast. To this end, we formally define the VNE problem for Multicast VNs (MVNs) and prove its NP-hard nature. We propose two novel approaches to solve the Multicast VNE (MVNE) problem with end-delay and delay-variation constraints: a 3-step MVNE technique and a Tabu-search algorithm. We motivate the intuition behind our proposed embedding techniques and provide a competitive analysis of our approaches over multiple metrics and against other embedding heuristics.
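The two QoS constraints the abstract names are easy to state precisely: every receiver must hear the source within an end-delay bound, and the spread between the fastest and slowest receiver must stay within a delay-variation bound. A minimal feasibility check for a candidate multicast embedding (function name and input shape are illustrative assumptions):

```python
def check_delay_constraints(receiver_delays, max_end_delay, max_delay_variation):
    """Feasibility check for a candidate multicast embedding.
    receiver_delays maps each receiver to its source-to-receiver delay
    along the embedded distribution tree.
      - end-delay: every receiver within max_end_delay;
      - delay variation: slowest minus fastest receiver bounded."""
    delays = list(receiver_delays.values())
    return (max(delays) <= max_end_delay
            and max(delays) - min(delays) <= max_delay_variation)
```

The delay-variation term is what breaks unicast-style treatment: routing each receiver independently can satisfy every end-delay bound while still violating the bound on the spread.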

    Optimal Polynomial Time Algorithm for Restoring Multicast Cloud Services

    The failure-prone nature of data center networks has evoked countless contributions to develop proactive and reactive countermeasures. Yet most of these techniques were developed with unicast services in mind, when in fact many services hosted in data center networks today rely on multicast communication to disseminate traffic. Hence, existing survivability schemes fail to cater to the distinctive properties and quality-of-service requirements that multicast services entail. This letter is devoted to understanding the ramifications of facility node or substrate link failure on multicast services residing in cloud networks. We formally define the multicast virtual network restoration problem and prove that it is NP-complete in arbitrary graphs. Furthermore, we prove that the problem can be solved in polynomial time in multi-rooted, tree-like data center network topologies.

    Traffic engineering in cloud data centers: A column generation approach

    While many have advocated the use of Virtual Local Area Networks (VLANs) as a way to provide scalable traffic management, finding the optimal traffic split (mapping) among VLANs to achieve load balancing has turned out to be a very challenging and combinatorially complex problem. This paper considers the traffic engineering problem in data center networks by studying the joint problem of finding spanning trees for VLANs and optimally selecting the most promising spanning trees onto which to map the traffic flows. We model this problem as an Integer Linear Program (ILP) and follow a primal-dual decomposition approach, using column generation, to solve a relaxed mapping version of the problem exactly, and we present approximate solutions to the original problem. We show through numerical evaluations the outstanding scalability of the decomposed version of the problem, and we use our results to study the performance of traffic engineering protocols developed in the recent literature for data center networks. This work was supported by NPRP grant 5-137-2-045 from the Qatar National Research Fund (a member of the Qatar Foundation).
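Column generation alternates between a restricted master problem over the columns (here, spanning trees) generated so far and a pricing problem that uses the master's dual values to find a new improving column, stopping when none exists. A generic sketch of that control loop only, with the two subproblems passed in as callables (this structure is standard; the paper's actual master and pricing formulations are not reproduced here):

```python
def column_generation(solve_master, price_column, max_iters=100):
    """Generic column-generation loop.
    solve_master(columns) -> (solution, duals): restricted master problem
        over the columns generated so far.
    price_column(duals) -> new column with negative reduced cost, or None.
    Terminates when pricing finds no improving column (optimality of the
    relaxation) or the iteration cap is hit."""
    columns = []
    solution = None
    for _ in range(max_iters):
        solution, duals = solve_master(columns)
        new_col = price_column(duals)
        if new_col is None or new_col in columns:
            break  # no improving column: the relaxation is solved
        columns.append(new_col)
    return solution, columns
```

The scalability claim in the abstract comes from exactly this structure: only the columns the pricing problem ever proposes are materialized, rather than the full exponential set of spanning trees.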

    Towards scalable traffic management in cloud data centers

    Cloud computing is becoming a mainstream paradigm, as organizations, large and small, begin to harness its benefits. This novel technology brings new challenges, mostly in the protocols that govern its underlying infrastructure. Traffic engineering in cloud data centers is one such challenge and has attracted attention from the research community, particularly since the legacy protocols employed in data centers offer limited and unscalable traffic management. Many have advocated the use of VLANs to provide scalable traffic management; however, finding the optimal traffic split between VLANs is the well-known NP-complete VLAN assignment problem, whose search space is huge even for small networks. This paper introduces a novel decomposition approach that solves the VLAN mapping problem in cloud data centers through column generation, an effective technique proven to reach optimality while exploring only a small subset of the search space. We introduce both an exact and a semi-heuristic decomposition with the objective of achieving load balancing by minimizing the maximum link load in the network. Our numerical results show that our approach explores less than 1% of the available search space, with an optimality gap of at most 4%. We have also compared our decomposition model against state-of-the-art traffic engineering protocols; this comparative analysis shows that our model attains an encouraging gain over its peers.
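The objective being optimized, minimizing the maximum link load, can be illustrated with a toy greedy assignment: each flow is mapped to whichever VLAN spanning tree leaves the smallest peak link load. This is a naive baseline for intuition only, not the paper's column-generation decomposition (names and input format are assumptions):

```python
def assign_flows(flows, tree_paths):
    """Greedy VLAN-mapping sketch minimizing the maximum link load.
    flows: {flow_id: demand}
    tree_paths: {tree_id: {flow_id: [link, ...]}} -- the path each flow
        would take if mapped to that spanning tree.
    Returns (assignment, peak_link_load)."""
    load, assignment = {}, {}
    for f, demand in flows.items():
        best_tree, best_peak = None, float('inf')
        for tree, paths in tree_paths.items():
            # Tentatively add this flow's demand along the tree's path.
            trial = dict(load)
            for link in paths[f]:
                trial[link] = trial.get(link, 0) + demand
            peak = max(trial.values())
            if peak < best_peak:
                best_tree, best_peak = tree, peak
        assignment[f] = best_tree
        for link in tree_paths[best_tree][f]:
            load[link] = load.get(link, 0) + demand
    return assignment, max(load.values())
```

The greedy answer depends on flow order and can be far from optimal, which is exactly why the paper resorts to an exact decomposition over the space of spanning-tree mappings.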